Convergence Analysis of Block Coordinate Algorithms with Determinantal Sampling
We analyze the convergence rate of the randomized Newton-like method
introduced by Qu et al. (2016) for smooth and convex objectives, which uses
random coordinate blocks of a Hessian over-approximation matrix M instead
of the true Hessian. The convergence analysis of the algorithm is challenging
because of its complex dependence on the structure of M. However, we show
that when the coordinate blocks are sampled with probability proportional to
their determinant, the convergence rate depends solely on the eigenvalue
distribution of matrix M, and has an analytically tractable form. To do so,
we derive a fundamental new expectation formula for determinantal point
processes. We show that determinantal sampling allows us to reason about the
optimal subset size of blocks in terms of the spectrum of M. Additionally,
we provide a numerical evaluation of our analysis, demonstrating cases where
determinantal sampling is superior to or on par with uniform sampling.
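The key sampling step can be illustrated directly from its definition. The sketch below (my own illustration, not the paper's efficient algorithm) draws a size-k coordinate block S with probability proportional to det(M_S), the determinant of the principal submatrix of M indexed by S, by enumerating all subsets; this is exponential in the dimension and only feasible for toy sizes, but it makes the target distribution concrete.

```python
# Illustrative brute-force sampler: P(S) proportional to det(M[S, S])
# for a PSD matrix M. Not the paper's method; enumeration only works
# for tiny dimension d.
import itertools
import numpy as np

def sample_determinantal_block(M, k, rng=None):
    """Sample a k-subset S of {0, ..., d-1} with P(S) ~ det(M[S, S])."""
    rng = np.random.default_rng(rng)
    d = M.shape[0]
    subsets = list(itertools.combinations(range(d), k))
    # Principal minors of a PSD matrix are nonnegative, so these are
    # valid unnormalized probabilities.
    dets = np.array([np.linalg.det(M[np.ix_(S, S)]) for S in subsets])
    idx = rng.choice(len(subsets), p=dets / dets.sum())
    return subsets[idx]

# Tiny demo on a random well-conditioned PSD matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
M = A @ A.T + np.eye(5)
S = sample_determinantal_block(M, k=2, rng=1)
print(sorted(S))
```

Blocks with large determinant are both high-volume and internally "diverse," which is what ties the resulting convergence rate to the eigenvalue distribution of M.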
Sampling from a k-DPP without looking at all items
Determinantal point processes (DPPs) are a useful probabilistic model for
selecting a small diverse subset out of a large collection of items, with
applications in summarization, stochastic optimization, active learning and
more. Given a kernel function and a subset size k, our goal is to sample k
out of n items with probability proportional to the determinant of the kernel
matrix induced by the subset (a.k.a. k-DPP). Existing k-DPP sampling
algorithms require an expensive preprocessing step which involves multiple
passes over all items, making it infeasible for large datasets. A naïve
heuristic addressing this problem is to uniformly subsample a fraction of the
data and perform k-DPP sampling only on those items; however, this method
offers no guarantee that the produced sample will even approximately resemble
the target distribution over the original dataset. In this paper, we develop an
algorithm which adaptively builds a sufficiently large uniform sample of data
that is then used to efficiently generate a smaller set of items, while
ensuring that this set is drawn exactly from the target distribution defined on
all items. We show empirically that our algorithm produces a k-DPP sample
after observing only a small fraction of all elements, leading to several
orders of magnitude faster performance compared to the state-of-the-art.
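The naïve heuristic criticized above is easy to state in code. The sketch below (helper names are my own, and the exact k-DPP step uses brute-force enumeration for tiny n) uniformly subsamples m of the n items and then draws a k-DPP restricted to that pool; it is cheap, but the returned subset is in general not distributed as the k-DPP over all n items, which is exactly the guarantee the paper's algorithm recovers.

```python
# Sketch of the uniform-subsample heuristic (hypothetical helper names).
# kdpp_exact enumerates all size-k subsets, so it is for tiny n only.
import itertools
import numpy as np

def kdpp_exact(L, k, rng):
    """Exact k-DPP over the items of kernel L, by enumeration."""
    n = L.shape[0]
    subsets = list(itertools.combinations(range(n), k))
    dets = np.array([np.linalg.det(L[np.ix_(S, S)]) for S in subsets])
    idx = rng.choice(len(subsets), p=dets / dets.sum())
    return subsets[idx]

def uniform_then_kdpp(L, k, m, rng):
    """Heuristic: uniformly pick an m-item pool, then k-DPP on the pool.
    Biased: the result does not match the k-DPP over all n items."""
    n = L.shape[0]
    pool = rng.choice(n, size=m, replace=False)
    S_local = kdpp_exact(L[np.ix_(pool, pool)], k, rng)
    return tuple(sorted(int(pool[i]) for i in S_local))

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
L = A @ A.T + 0.1 * np.eye(8)  # PSD kernel over n = 8 items
print(uniform_then_kdpp(L, k=3, m=5, rng=rng))
```

Because the pool discards items before any determinant is computed, highly diverse subsets spanning the discarded items get probability zero under the heuristic, whereas the paper's adaptive procedure grows the uniform sample until an exact draw from the full k-DPP can be certified.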